    Cholesky-factorized sparse Kernel in support vector machines

    Get PDF
    Support Vector Machine (SVM) is one of the most powerful machine learning algorithms owing to its convex optimization formulation and its ability to handle non-linear classification. However, one of its main drawbacks is the long time it takes to train on large data sets. This limitation is most pronounced when applying non-linear kernels (e.g. the RBF kernel), which are usually required to obtain better separation for linearly inseparable data sets. In this thesis, we study an approach that aims to speed up training by combining the better performance of RBF kernels with the fast training of a linear solver, LIBLINEAR. The approach uses a sparse RBF kernel matrix, which is factorized using Cholesky decomposition. The method is tested on large artificial and real data sets and compared to the standard RBF and linear kernels, with both accuracy and training time reported. For most data sets, the results show a large training time reduction, over 90%, whilst maintaining accuracy.
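    The pipeline described in this abstract can be sketched in a few lines: build an RBF Gram matrix, sparsify it, take its Cholesky factor, and feed the factor rows as explicit features to a linear solver (scikit-learn's LinearSVC wraps LIBLINEAR). The data, threshold, and diagonal shift below are illustrative assumptions rather than the thesis settings, and the extension of the factorization to unseen test points is omitted.

```python
# Minimal sketch (not the thesis code): sparsified RBF kernel + Cholesky factor
# used as explicit features for a linear SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

K = rbf_kernel(X, X, gamma=0.1)         # dense RBF Gram matrix
thr = 1e-4                              # assumed sparsification cutoff
K[K < thr] = 0.0                        # drop near-zero entries to obtain a sparse kernel
K += len(K) * thr * np.eye(len(K))      # diagonal shift keeps the thresholded matrix positive definite

L = np.linalg.cholesky(K)               # K = L @ L.T; row i of L is an explicit feature vector for sample i

clf = LinearSVC(C=1.0, max_iter=10000)  # linear solver (LIBLINEAR under the hood)
clf.fit(L, y)
print("training accuracy:", clf.score(L, y))
```

    Training the linear model on the rows of L corresponds to a kernel SVM on the (shifted, thresholded) Gram matrix, since the inner products of those rows reproduce K.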

    Generating Infinite-Resolution Texture using GANs with Patch-by-Patch Paradigm

    Full text link
    In this paper, we introduce a novel approach for generating texture images of infinite resolution using Generative Adversarial Networks (GANs) based on a patch-by-patch paradigm. Existing texture synthesis techniques often generate a large-scale texture in a single forward pass of the generative model, which limits the scalability and flexibility of the generated images. In contrast, the proposed approach trains GAN models on a single texture image to generate relatively small patches that are locally correlated and can be seamlessly concatenated to form a larger image while using a constant GPU memory footprint. Our method learns the local texture structure and is able to generate textures of arbitrary size, while also maintaining coherence and diversity. The proposed method relies on local padding in the generator to ensure consistency between patches and utilizes spatial stochastic modulation to allow for local variations and diversity within the large-scale image. Experimental results demonstrate superior scalability compared to existing approaches while maintaining the visual coherence of the generated textures.
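    A toy illustration of the general patch-by-patch idea is sketched below: a single shared latent grid defines the whole image, overlapping windows of it are passed through a small fully convolutional generator, and only the central crop of each output is kept so neighbouring patches agree at the seams. This is an assumption-laden stand-in; the paper's actual mechanisms (local padding inside the generator and spatial stochastic modulation) are not reproduced, and the architecture, patch size, and margin are arbitrary choices.

```python
# Toy patch-by-patch synthesis from a shared latent grid (illustrative only).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Stand-in fully convolutional generator, not the paper's architecture."""
    def __init__(self, z_ch=8, out_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(z_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

gen = TinyGenerator().eval()

# One latent grid defines the (arbitrarily large) image; only one window of it
# is materialized per step, so GPU memory use is constant per patch.
H, W, patch, margin = 64, 64, 16, 4
z_grid = torch.randn(1, 8, H, W)
canvas = torch.zeros(1, 3, H, W)  # the outer `margin` border is left empty in this sketch

with torch.no_grad():
    step = patch - 2 * margin
    for i in range(0, H - patch + 1, step):
        for j in range(0, W - patch + 1, step):
            out = gen(z_grid[:, :, i:i + patch, j:j + patch])
            # keep only the central crop so adjacent patches agree at the seams
            canvas[:, :, i + margin:i + patch - margin, j + margin:j + patch - margin] = \
                out[:, :, margin:patch - margin, margin:patch - margin]
```

    The crop margin is chosen larger than the generator's receptive-field radius, so each kept pixel depends only on latent values inside its own window and the stitched patches are seam-free.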

    Generation of non-stationary stochastic fields using Generative Adversarial Networks with limited training data

    Full text link
    In the context of generating geological facies conditioned on observed data, samples corresponding to all possible conditions are not generally available in the training set, and hence the generation of these realizations depends primarily on the generalization capability of the trained generative model. The problem becomes more complex when applied to non-stationary fields. In this work, we investigate the problem of training Generative Adversarial Network (GAN) models on a dataset of geological channelized patterns that has only a few non-stationary spatial modes, and we examine the training and self-conditioning settings that improve the generalization capability at new spatial modes never seen in the given training set. The developed training method allows the correlation between the spatial conditions (i.e. non-stationary maps) and the realizations to be learned implicitly, without additional loss terms or a costly optimization problem at the realization generation phase. Our models, trained on real and artificial datasets, were able to generate geologically plausible realizations beyond the training samples, with a strong correlation with the target maps.
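    One common way to realize this kind of implicit spatial conditioning, sketched below, is to concatenate the non-stationary condition map with the noise channels at the generator input and leave the adversarial objective to learn the correlation; the architecture, channel counts, and condition map here are assumptions for illustration, not the models trained in this work.

```python
# Hedged sketch of a spatially conditioned generator (illustrative only).
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy generator: the non-stationary map enters as extra input channels."""
    def __init__(self, noise_ch=4, cond_ch=1, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_ch + cond_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, noise, cond_map):
        # no explicit conditioning loss term: a discriminator (not shown) would
        # see (realization, cond_map) pairs, so the adversarial loss alone
        # pushes the output to follow the map
        return self.net(torch.cat([noise, cond_map], dim=1))

G = CondGenerator()
noise = torch.randn(2, 4, 64, 64)  # stochastic component of each realization
cond = torch.rand(2, 1, 64, 64)    # hypothetical non-stationary spatial map
fake_facies = G(noise, cond)       # realizations expected to correlate with the map
```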

    Generating unrepresented proportions of geological facies using Generative Adversarial Networks

    No full text
    In this work, we investigate the capacity of Generative Adversarial Networks (GANs) to interpolate and extrapolate facies proportions in a geological dataset. The newly generated realizations with unrepresented (i.e. missing) proportions are assumed to belong to the same original data distribution. Specifically, we design a conditional GAN model that can drive the generated facies toward new proportions not found in the training set. The study includes an investigation of various training settings and model architectures. In addition, we devise new conditioning routines for improved generation of the missing samples. Numerical experiments on images of binary and multiple facies showed good geological consistency as well as a strong correlation with the target conditions.
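    A generic way to drive generation toward a target proportion, shown below, is to broadcast the scalar proportion into a constant input channel of the generator; this is only an illustration under assumed shapes and values, not the conditioning routines devised in this work.

```python
# Hedged sketch of proportion-conditioned generation (illustrative only).
import torch
import torch.nn as nn

class ProportionCondGenerator(nn.Module):
    """Toy generator conditioned on a target facies proportion."""
    def __init__(self, noise_ch=4, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(noise_ch + 1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, out_ch, 3, padding=1), nn.Sigmoid(),  # per-pixel facies probability
        )

    def forward(self, noise, proportion):
        # broadcast the scalar target proportion into a constant channel
        p_map = proportion.view(-1, 1, 1, 1).expand(-1, 1, *noise.shape[2:])
        return self.net(torch.cat([noise, p_map], dim=1))

G = ProportionCondGenerator()
noise = torch.randn(2, 4, 64, 64)
target_p = torch.tensor([0.25, 0.60])    # hypothetical proportions absent from the training set
realizations = G(noise, target_p)
print(realizations.mean(dim=(1, 2, 3)))  # empirical proportion of each realization (untrained, so arbitrary here)
```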